What is the difference between the purpose of usability and user experience evaluation methods?
ABSTRACT
There are different interpretations of user experience that lead to different scopes of measurement. The ISO definition suggests that measures of user experience are similar to measures of satisfaction in usability. A survey at Nokia showed that user experience was interpreted in a similar way to usability, but with the addition of anticipation and hedonic responses. CHI 2009 SIG participants identified not just measurement methods, but methods that help understanding of how and why people use products. A distinction can be made between usability methods, which have the objective of improving human performance, and user experience methods, which have the objective of improving user satisfaction with achieving both pragmatic and hedonic goals. Sometimes the term "user experience" is used to refer to both approaches.

DEFINITIONS OF USABILITY AND USER EXPERIENCE

There has been much recent debate about the scope of user experience and how it should be defined [5]. The definition of user experience in ISO FDIS 9241-210 is:

"A person's perceptions and responses that result from the use and/or anticipated use of a product, system or service."

This contrasts with the revised definition of usability in ISO FDIS 9241-210:

"Extent to which a system, product or service can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use."

Both definitions suggest that usability or user experience can be measured during or after use of a product, system or service. A person's "perceptions and responses" in the definition of user experience are similar to the concept of satisfaction in usability. From this perspective, measures of user experience can be encompassed within the 3-component model of usability [1], particularly when the experience is task-related. A weakness of both definitions is that they are not explicitly concerned with time.
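When the experience is task-related, the three components of the usability definition can be computed directly from test-session data. The sketch below is illustrative rather than taken from the paper: the session fields are invented, and normalising efficiency against an assumed expert baseline time is just one common operationalisation among several.

```python
from statistics import mean

def usability_measures(sessions, reference_time_s):
    """Summarise the three ISO 9241-11 components from usability-test sessions.

    Each session is a dict with (hypothetical) keys:
      'completed'    - bool, did the user achieve the task goal?
      'time_s'       - float, task time in seconds
      'satisfaction' - post-task rating on a 1-7 scale
    reference_time_s is an assumed expert baseline used to normalise efficiency.
    """
    # Effectiveness: proportion of sessions in which the goal was achieved.
    effectiveness = mean(1.0 if s["completed"] else 0.0 for s in sessions)
    # Time-based efficiency relative to the baseline (1.0 = expert-level speed);
    # failed sessions contribute zero.
    efficiency = mean(
        (reference_time_s / s["time_s"]) if s["completed"] else 0.0
        for s in sessions
    )
    # Satisfaction: mean of the post-task ratings.
    satisfaction = mean(s["satisfaction"] for s in sessions)
    return {"effectiveness": effectiveness,
            "efficiency": efficiency,
            "satisfaction": satisfaction}

sessions = [
    {"completed": True,  "time_s": 60.0,  "satisfaction": 6},
    {"completed": True,  "time_s": 120.0, "satisfaction": 5},
    {"completed": False, "time_s": 180.0, "satisfaction": 2},
]
print(usability_measures(sessions, reference_time_s=60.0))
```

Note that all three numbers describe use at a single point in time, which is exactly the limitation the definitions share.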
Just as the ISO 9241-11 definition of usability has nothing to say about learnability (where usability changes over time), so the ISO 9241-210 definition of user experience has nothing to say about the way user experience evolves from expectation, through actual interaction, to a total experience that includes reflection on the experience [7].

USER EXPERIENCE NEEDS IN DESIGN AND DEVELOPMENT

Ketola and Roto [4] surveyed the needs for information on user experience at Nokia, asking senior staff: "Which user experience information (measurable data gained from our target users directly or indirectly) is useful for your organization? How?" 21 needs were identified from 18 respondents working in Research, Development, Care, and Quality. Ketola and Roto categorised the responses by the area measured: UX lifecycle, retention, use of functions, breakdowns, customer care, localization, device performance and new technology. In Table 1, the needs have been recategorised by type of measure. Most of the measures are common to conventional approaches to user centred design, but three are specific to user experience:
• The impact of expected UX on purchase decisions
• Continuous excitement
• Why and when the user experiences frustration

USER EXPERIENCE EVALUATION METHODS

At the CHI 2009 SIG "User Experience Evaluation – Do You Know Which Method to Use?" [6] [8], participants were asked to describe the user experience evaluation methods they used. 36 methods were collected (including the example methods presented by the organizers). These have been categorised in Table 2 by type of evaluation context and type of data collected. There was very little mention of measures specific to user experience, particularly from industry participants.
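Several of the UX-specific needs above ("continuous excitement", "why and when the user experiences frustration") are typically probed in situ with experience sampling, one of the method families collected at the SIG. The following is a minimal sketch of an event-triggered sampler; the event names and the `ask_user` hook are hypothetical, not part of any published method.

```python
import random
from dataclasses import dataclass, field

@dataclass
class ExperienceSampler:
    """Ask for a short in-situ self-report when an interesting event occurs,
    rather than on a fixed timer (event-triggered experience sampling)."""
    probe_probability: float = 1.0  # throttle below 1.0 to avoid over-probing
    trigger_events: tuple = ("task_failed", "error_dialog", "feature_first_use")
    reports: list = field(default_factory=list)

    def on_event(self, event, ask_user):
        """ask_user(event) is the UI hook that shows the probe and returns
        the user's answer (e.g. an Emocards-style feeling rating)."""
        if event in self.trigger_events and random.random() <= self.probe_probability:
            self.reports.append({"event": event, "response": ask_user(event)})

# Simulated run: probe on trigger events, ignore routine ones.
sampler = ExperienceSampler()
for ev in ["app_start", "error_dialog", "task_failed"]:
    sampler.on_event(
        ev,
        ask_user=lambda e: "frustrated" if ("fail" in e or "error" in e) else "ok",
    )
print(sampler.reports)
```

The design choice here mirrors the frustration need: sampling at breakdown moments captures "why and when", which a fixed-interval diary would miss.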
It seems that industry's interpretation of user experience evaluation methods is much broader, going beyond conventional evaluation to encompass methods that collect information which helps design for user experience. In that sense, user experience evaluation seems to be interpreted as the set of user centred design methods for achieving user experience. The differentiating factor from more traditional usability work is thus a wider end goal: not just achieving effectiveness, efficiency and satisfaction, but optimising the whole user experience, from expectation through actual interaction to reflection on the experience.

DIFFERENCES BETWEEN USABILITY AND USER EXPERIENCE

Although there is no fundamental difference between measures of usability and measures of user experience at a particular point in time, the difference in emphasis between task performance and pleasure leads to different concerns during development.

Measurement category | Measurement type | Measure | Area measured
Anticipation | Pre-purchase | Anticipated use: the impact of expected UX on purchase decisions | UX lifecycle
Overall usability | First use | Effectiveness: success of taking the product into use | UX lifecycle
Overall usability | Product upgrade | Effectiveness: success in transferring content from the old device to the new device | UX lifecycle
Overall usability | Expectations vs. reality | Satisfaction: has the device met your expectations? | Retention
Overall usability | Long term experience | Satisfaction: are you satisfied with the product quality (after 3 months of use)? | Retention
Hedonic | Engagement | Pleasure: continuous excitement | Retention
Hedonic | UX obstacles | Frustration: why and when does the user experience frustration? | Breakdowns
Detailed usability | Use of device functions | How used: what functions are used, how often, why, how, when, where? | Use of functions
Detailed usability | Malfunction | Technical problems: number of "reboots" and severe technical problems experienced | Breakdowns
Detailed usability | Usability problems | Usability problems: top 10 usability problems experienced by customers | Breakdowns
Detailed usability | Effect of localization | Satisfaction with localisation: how do users perceive content in their local language? | Localization
Detailed usability | Latencies | Satisfaction with device performance: perceived latencies in key tasks | Device performance
Detailed usability | Performance | Satisfaction with device performance: perceived UX of device performance | Device performance
Detailed usability | Perceived complexity | Satisfaction with task complexity: actual and perceived complexity of task accomplishment | Device performance
User differences | Previous devices | Previous user experience: which device did you have previously? | Retention
User differences | Differences in user groups | User differences: how do different user groups access features? | Use of functions
User differences | Reliability of product planning | User differences: comparison of target users vs. actual buyers | Use of functions
Support | Customer experience in "touchpoints" | Satisfaction with support: how does the customer think and feel about the interaction in the touchpoints? | Customer care
Support | Accuracy of support information | Consequences of poor support: does inaccurate support information result in product returns? How? | Customer care
Innovation feedback | User wish list | New user ideas and innovations triggered by new experiences | New technologies
Impact of use | Change in user behaviour | How the device affects user behaviour: how are usage patterns changing when new technologies are introduced? | New technologies

Table 1. Categorisation of usability measures reported in [4]

In the context of user centred design, typical usability concerns include:
1. Designing for and evaluating overall effectiveness and efficiency.
2. Designing for and evaluating user comfort and satisfaction.
3. Designing to make the product easy to use, and evaluating the product in order to identify and fix usability problems.
4. When relevant, the temporal aspect leads to a concern for learnability.

In the context of user centred design, typical user experience concerns include:
1. Understanding and designing the user's experience with a product: the way in which people interact with a product over time, what they do and why.
2. Maximising the achievement of the hedonic goals of stimulation, identification and evocation, and the associated emotional responses.

Sometimes the two sets of issues are contrasted as usability and user experience, but some organisations would include both under the common umbrella of user experience.

Evaluation context | Methods
Lab tests | Lab study with mind maps; Paper prototyping
Field tests | Product / tool comparison; Competitive evaluation of prototypes in the wild; Field observation; Long term pilot study; Longitudinal comparison; Contextual inquiry; Observation / post interview; Activity experience sampling; Longitudinal evaluation; Ethnography; Field observations; Longitudinal studies
Evaluation of groups | Evaluating collaborative user experiences
Instrumented product | TRUE (Tracking Realtime User Experience)
Domain specific | Nintendo Wii; Children (OPOS, Outdoor Play Observation Scheme); This-or-that
Approaches | Evaluating UX jointly with usability

Evaluation data | Methods
User opinion / interview | Lab study with mind maps; Quick and dirty evaluation; Audio narrative; Retrospective interview; Contextual inquiry; Focus group evaluation; Observation / post interview; Activity experience sampling; Sensual Evaluation Instrument; Contextual laddering interview; ESM
User questionnaire | Survey questions; Emocards; Experience sampling triggered by events; SAM; Magnitude estimation; TRUE (Tracking Realtime User Experience); Questionnaire (e.g. AttrakDiff)
Human responses | PURE (preverbal user reaction evaluation); Psycho-physiological measurements
Expert evaluation | Expert evaluation; Heuristic matrix; Perspective-based inspection

Table 2. User experience evaluation methods (CHI 2009 SIG)

CONCLUSIONS

The scope of user experience

The concept of user experience broadens both:
• The range of human responses that would be measured, to include pleasure.
• The circumstances in which they would be measured, to include anticipated use and reflection on use.

Equally importantly, the goal of achieving improved user experience over the whole lifecycle of user involvement with the product leads to increased emphasis on methods that help understand what can be done to improve that experience throughout the lifecycle. However, notably absent from the current surveys and initiatives is a concern with requirements. User experience seems to be following in the footsteps of other fields where a focus on evaluation has preceded a concern with establishing criteria for what would be acceptable results of evaluation.

User experience and usability

The notes that accompany the definition of user experience in ISO FDIS 9241-210 show some ambivalence as to whether usability is part of user experience, stating: "User experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviours and accomplishments that occur before, during and after use." If user experience includes all behaviour, it presumably includes the user's effectiveness and efficiency. This seems consistent with the methods nominated by many people in industry [4], [8], who appear to have subsumed usability within user experience. In contrast, researchers working in the field consider user experience to be entirely subjective, e.g. "The objective measures such as task execution time and the number of clicks or errors are not valid measures for UX, but we need to understand how the user feels about the system." [8]

In summary, user experience can be conceptualised in different ways:
1. An elaboration of the satisfaction component of usability [1].
2. Distinct from usability, which has a historical emphasis on user performance [8].
3. An umbrella term for all the user's perceptions and responses, whether measured subjectively or objectively [3].
Regardless of the terminology used, there are two distinct objectives:
• Optimising human performance.
• Optimising user satisfaction with achieving both pragmatic and hedonic goals.

The relative importance of methods supporting these two goals will depend on the specific product and design objectives. Methods for optimising user satisfaction with achieving both pragmatic and hedonic goals can be categorised as:
1. Methods to evaluate and design for the hedonic goals of stimulation, identification and evocation, and the associated emotional responses.
2. Methods to evaluate and design for the user's perception of achievement of the pragmatic goals associated with task success.
3. Methods that support the design of the user's experience (including setting requirements and understanding the context of use).
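The pragmatic/hedonic split can be made concrete with the kind of questionnaire listed in Table 2: AttrakDiff-style instruments score semantic-differential items separately per quality dimension. The sketch below is illustrative only; the item names and grouping are invented for the example and are not AttrakDiff's actual word pairs.

```python
from statistics import mean

# Semantic-differential items on a 1..7 scale, grouped by quality dimension.
# Item names are hypothetical, loosely modelled on AttrakDiff-style word pairs.
ITEMS = {
    "pragmatic": ["confusing-clear", "impractical-practical", "complicated-simple"],
    "hedonic":   ["dull-captivating", "ordinary-novel", "isolating-connecting"],
}

def quality_scores(answers):
    """Average each dimension; answers maps item name -> rating (1..7)."""
    return {dim: mean(answers[item] for item in items)
            for dim, items in ITEMS.items()}

answers = {
    "confusing-clear": 6, "impractical-practical": 5, "complicated-simple": 6,
    "dull-captivating": 3, "ordinary-novel": 2, "isolating-connecting": 4,
}
print(quality_scores(answers))
```

A profile like this one, scoring well pragmatically but poorly hedonically, describes a product that is easy to use yet uninspiring: precisely the gap between the two objectives that the user experience perspective highlights.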
Publication date: 2009